The state of AI ethics: The principles, the tools, the regulations
What do we talk about when we talk about AI ethics? Just like AI itself, definitions for AI ethics seem to abound. A definition that seems to have garnered some consensus is that AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technologies. If this definition seems ambiguous to you, you aren't alone. There is an array of issues that people tend to associate with the term "AI ethics," ranging from bias in algorithms to the asymmetrical or unlawful use of AI, the environmental impact of AI technology, and national and international policies around it.
Becoming an upstander in AI Ethics
In the end, we will remember not the words of our enemies, but the silence of our friends. Given the enormous upheaval in the field of AI ethics over the past three months, I think it behooves us to think deeply about the role each of us can play in making a meaningful, positive impact on the world. The idea of becoming an upstander in AI ethics is particularly powerful, and I believe that in 2021 this is the right way to help create a healthier ecosystem for us all. As I wrote in my piece on Why Civic Competence is needed in AI ethics in 2021, I believe that competence comes with an additional rider: we need to act on it. We routinely come across scenarios where we can raise our voices (respectfully) and point out injustices when we see them happen around us or to us.
Making Responsible AI the Norm rather than the Exception
This report prepared by the Montreal AI Ethics Institute provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the Norm rather than the Exception. It does so by utilizing the guiding principles of: (1) alleviating friction in existing workflows, (2) empowering stakeholders to get buy-in, and (3) conducting an effective translation of abstract standards into actionable engineering practices. After providing some overarching comments on the document from the NSCAI, the report dives into the primary contribution of an actionable framework to help operationalize the ideas presented in the document from the NSCAI. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically-driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce each other to move from principles to practice in service of making Responsible AI the norm rather than the exception.
AI ethics groups are repeating one of society's classic mistakes – MIT Technology Review
International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates. AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today, of which there are dozens, aim to help everyone benefit from this technology, and to prevent it from causing harm. Generally speaking, they do this by creating guidelines and principles for developers, funders, and regulators to follow.
Report prepared by the Montreal AI Ethics Institute (MAIEI) for Publication Norms for Responsible AI by Partnership on AI
Gupta, Abhishek; Lanteigne, Camylle; Heath, Victoria
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
Report prepared by the Montreal AI Ethics Institute In Response to Mila's Proposal for a Contact Tracing App
Cohen, Allison; Gupta, Abhishek
Contact tracing has grown in popularity as a promising solution to the COVID-19 pandemic. The benefits of automated contact tracing are two-fold. Contact tracing promises to reduce the number of infections by being able to: 1) systematically identify all those who have been in contact with someone who has had COVID; and 2) ensure those that have been exposed to the virus do not unknowingly infect others. "COVI" is the name of a recent contact tracing app developed by Mila and was proposed to help combat COVID-19 in Canada. The app was designed to inform each individual of their relative risk of being infected with the virus, which Mila claimed would empower citizens to make informed decisions about their movement and allow for a data-driven approach to public health policy, all the while ensuring data is safeguarded from governments, companies, and individuals. This article will provide a critical response to Mila's COVI White Paper. Specifically, this article will discuss: the extent to which diversity has been considered in the design of the app, assumptions surrounding users' interaction with the app and the app's utility, as well as unanswered questions surrounding transparency, accountability, and security. We see this as an opportunity to supplement the excellent risk analysis done by the COVI team to surface insights that can be applied to other contact- and proximity-tracing apps that are being developed and deployed across the world. Our hope is that, through a meaningful dialogue, we can ultimately help organizations develop better solutions that respect the fundamental rights and values of the communities these solutions are meant to serve.
Montreal AI Ethics Institute suggests ways to counter bias in AI models
The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity's place in an algorithm-driven world, today published the inaugural edition of its State of AI Ethics report. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter. The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they're likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings. "Demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data," the report reads.
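The demographic parity notion quoted above can be made concrete with a small numerical sketch. The data and function below are hypothetical illustrations, not taken from the report: demographic parity asks whether a model's positive-prediction (selection) rate is the same across demographic groups, while equalized odds additionally conditions that comparison on the true label.

```python
def selection_rate(preds, groups, group):
    """Fraction of positive predictions a model gives to one group."""
    picks = [p for p, g in zip(preds, groups) if g == group]
    return sum(picks) / len(picks)

# Hypothetical binary decisions (1 = selected) for two groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = selection_rate(preds, groups, "a")  # 3 of 4 selected -> 0.75
rate_b = selection_rate(preds, groups, "b")  # 1 of 4 selected -> 0.25
parity_gap = abs(rate_a - rate_b)            # 0.5: far from demographic parity
```

A gap of zero would mean the two groups are selected at identical rates; as the report's authors note, scoring diversity this way is purely algorithmic and says nothing about the cultural and contextual meaning of the outcomes.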
The State of AI Ethics Report (June 2020)
Gupta, Abhishek; Lanteigne, Camylle; Heath, Victoria; Ganapini, Marianna Bergamaschi; Galinkin, Erick; Cohen, Allison; De Gasperis, Tania; Akif, Mo; Butalid, Renjie
These past few months have been especially challenging, and the deployment of technology in ways hitherto untested, at an unrivalled pace, has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly large amounts of time online. It has never been more important that we keep a sharp eye on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check on the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report, spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.
Response by the Montreal AI Ethics Institute to the European Commission's Whitepaper on AI
Gupta, Abhishek; Lanteigne, Camylle
In February 2020, the European Commission (EC) published a white paper entitled On Artificial Intelligence - A European approach to excellence and trust. This paper outlines the EC's policy options for the promotion and adoption of artificial intelligence (AI) in the European Union. The Montreal AI Ethics Institute (MAIEI) reviewed this paper and published a response addressing the EC's plans to build an "ecosystem of excellence" and an "ecosystem of trust," as well as the safety and liability implications of AI, the internet of things (IoT), and robotics. MAIEI provides 15 recommendations in relation to the sections outlined above, including: 1) focus efforts on the research and innovation community, member states, and the private sector; 2) create alignment between trading partners' policies and EU policies; 3) analyze the gaps in the ecosystem between theoretical frameworks and approaches to building trustworthy AI; 4) focus on coordination and policy alignment; 5) focus on mechanisms that promote private and secure sharing of data; 6) create a network of AI research excellence centres to strengthen the research and innovation community; 7) promote knowledge transfer and develop AI expertise through Digital Innovation Hubs; 8) add nuance to the discussion regarding the opacity of AI systems; 9) create a process for individuals to appeal an AI system's decision or output; 10) implement new rules and strengthen existing regulations; 11) ban the use of facial recognition technology; 12) hold all AI systems to similar standards and compulsory requirements; 13) ensure biometric identification systems fulfill the purpose for which they are implemented; 14) implement a voluntary labelling system for systems that are not considered high-risk; 15) appoint individuals to the oversight process who understand AI systems well and are able to communicate potential risks.